We extend the work of Narasimhan and Bilmes [30] for minimizing set functions representable as a difference between submodular functions. Similar to [30], our new algorithms are guaranteed to monotonically reduce the objective function at every step. We empirically and theoretically show that the per-iteration cost of our algorithms is much less than [30], and our algorithms can be used to efficiently minimize a difference between submodular functions under various combinatorial constraints, a problem not previously addressed. We provide computational bounds and a hardness result on the multiplicative inapproximability of minimizing the difference between submodular functions. We show, however, that it is possible to give worst-case additive bounds by providing a polynomial time computable lower-bound on the minima. Finally we show how a number of machine learning problems can be modeled as minimizing the difference between submodular functions. We experimentally show the validity of our algorithms by testing them on the problem of feature selection with submodular cost features.